# Multilingual Pretraining
## UMT5-XXL

- **License:** Apache-2.0
- **Description:** UMT5 is a multilingual text generation model pretrained on the mC4 corpus. It supports 107 languages and balances them during pretraining with the UniMax sampling strategy.
- **Tags:** Large Language Model · Transformers · Multilingual
- **Author:** google
- **Downloads:** 4,449 · **Likes:** 32
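A minimal loading-and-generation sketch with the Transformers library, assuming the hub id `google/umt5-xxl` for this entry. Since UMT5 checkpoints are pretrained only (no supervised fine-tuning), sentinel-token infilling is shown rather than instruction-style prompting; the UMT5-XL entry below loads the same way.

```python
# Minimal UMT5 generation sketch. The checkpoint id "google/umt5-xxl" is
# assumed from the listing above; "google/umt5-xl" (next entry) loads the
# same way, and "google/umt5-small" is handy for a quick local test.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "google/umt5-xxl"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# UMT5 checkpoints are pretrained with span corruption only, so sentinel
# infilling is the native usage; fine-tune for downstream tasks.
inputs = tokenizer("A <extra_id_0> walks into a bar.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```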
## UMT5-XL

- **License:** Apache-2.0
- **Description:** A multilingual text generation model pretrained on the mC4 corpus, supporting 107 languages; the smaller sibling of UMT5-XXL above.
- **Tags:** Large Language Model · Transformers · Multilingual
- **Author:** google
- **Downloads:** 1,049 · **Likes:** 17
## Persian XLM-RoBERTa Large

- **Description:** A question-answering model fine-tuned from the multilingual XLM-RoBERTa pretrained model on PQuAD, a Persian QA dataset.
- **Tags:** Question Answering · Transformers
- **Author:** pedramyazdipoor
- **Downloads:** 77 · **Likes:** 3
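An extractive QA sketch using the Transformers pipeline API. The hub id `pedramyazdipoor/persian_xlm_roberta_large` is an assumption inferred from the author and model name above.

```python
# Extractive QA sketch; the hub id below is inferred from the listing
# (author pedramyazdipoor + model name) and may need adjusting.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="pedramyazdipoor/persian_xlm_roberta_large",
)

# PQuAD-style extractive QA: the answer is a span of the context.
result = qa(
    question="پایتخت ایران کجاست؟",      # "What is the capital of Iran?"
    context="تهران پایتخت ایران است.",   # "Tehran is the capital of Iran."
)
print(result["answer"], result["score"])
```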
## XLM-RoBERTa Base Finetuned PANX FR

- **License:** MIT
- **Description:** A French token classification model fine-tuned from XLM-RoBERTa-base on the XTREME dataset (PAN-X subset).
- **Tags:** Sequence Labeling · Transformers
- **Author:** andreaschandra
- **Downloads:** 16 · **Likes:** 0
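A token-classification sketch via the pipeline API. The hub id `andreaschandra/xlm-roberta-base-finetuned-panx-fr` is inferred from the listing; the German model in the next entry follows the same pattern with its own id.

```python
# Token classification (NER-style tagging) sketch; the hub id is inferred
# from the listing and may need adjusting.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="andreaschandra/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",  # merge subword pieces into entity spans
)

for entity in tagger("Emmanuel Macron vit à Paris."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```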
## XLM-RoBERTa Base Finetuned PANX DE

- **License:** MIT
- **Description:** A German token classification model fine-tuned from XLM-RoBERTa-base on the XTREME dataset (PAN-X subset); it is used the same way as the French model above.
- **Tags:** Sequence Labeling · Transformers
- **Author:** novarac23
- **Downloads:** 17 · **Likes:** 0
## XLM-RoBERTa Large Finetuned CoNLL02 Spanish

- **Description:** A named entity recognition model fine-tuned from XLM-RoBERTa-large on the Spanish CoNLL-2002 dataset.
- **Tags:** Sequence Labeling · Multilingual
- **Author:** FacebookAI
- **Downloads:** 244 · **Likes:** 2
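The same token-classification pipeline applies here; `FacebookAI/xlm-roberta-large-finetuned-conll02-spanish` is the expected hub id given the author and model name, but treat it as an assumption.

```python
# Spanish NER sketch with the CoNLL-2002 fine-tune; the hub id is assumed
# from the listing (FacebookAI org + model name).
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="FacebookAI/xlm-roberta-large-finetuned-conll02-spanish",
    aggregation_strategy="simple",
)
print(ner("Gabriel García Márquez nació en Aracataca, Colombia."))
```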
## XLM-RoBERTa Large

- **License:** MIT
- **Description:** XLM-RoBERTa is a multilingual model pretrained with a masked language modeling objective on 2.5 TB of filtered CommonCrawl data covering 100 languages.
- **Tags:** Large Language Model · Multilingual
- **Author:** FacebookAI
- **Downloads:** 5.3M · **Likes:** 431
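Because this base checkpoint is pretrained with masked language modeling only, fill-mask is the natural smoke test; the sketch below assumes the hub id `FacebookAI/xlm-roberta-large`.

```python
# Fill-mask sketch for the pretrained checkpoint; XLM-RoBERTa uses <mask>
# as its mask token. Hub id assumed from the listing.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="FacebookAI/xlm-roberta-large")
for pred in unmasker("Paris is the <mask> of France."):
    print(pred["token_str"], round(pred["score"], 3))
```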